28 research outputs found

    Abstract Pattern Image Generation using Generative Adversarial Networks

    Abstract patterns are widely used in the textile and fashion industry. Pattern design is an area where designers need to come up with new and attractive patterns every day, and it is difficult to find employees with the creative mindset and skills required to produce previously unseen, attractive designs. It would therefore be ideal to identify a process that allows such patterns to be generated with little to no human interaction. This can be achieved using deep learning models and techniques, and one of the most recent and promising tools for this type of problem is the Generative Adversarial Network (GAN). In this paper, we investigate the suitability of GANs for producing abstract patterns. We do this by generating abstract design patterns using the two most popular GAN variants, namely the Deep Convolutional GAN and the Wasserstein GAN. By identifying the best-performing model through hyperparameter optimization during training and generating sample output patterns, we show that the Wasserstein GAN is superior to the Deep Convolutional GAN.
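
    The paper itself provides no code, but the defining difference between the two models, the Wasserstein critic objective with a Lipschitz constraint, can be sketched as below. This is a minimal, hypothetical PyTorch training step assuming flat grayscale pattern images; the network sizes, learning rate, and clip value are illustrative choices, not the paper's settings.

```python
# Minimal WGAN training step (weight-clipping variant). All hyperparameters
# and network shapes here are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64           # assumed sizes for flat grayscale patterns
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
critic = nn.Sequential(                      # WGAN "critic": no sigmoid on the output
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def train_step(real_batch, n_critic=5, clip=0.01):
    # 1) Update the critic several times: maximize E[C(real)] - E[C(fake)].
    for _ in range(n_critic):
        z = torch.randn(real_batch.size(0), latent_dim)
        fake = generator(z).detach()
        loss_c = critic(fake).mean() - critic(real_batch).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():        # enforce the Lipschitz constraint
            p.data.clamp_(-clip, clip)
    # 2) Update the generator: minimize -E[C(fake)].
    z = torch.randn(real_batch.size(0), latent_dim)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()

# Usage with random stand-in data in place of real pattern images:
print(train_step(torch.rand(32, img_dim) * 2 - 1))
```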

    Speech enhancement Algorithm based on super-Gaussian modeling and orthogonal polynomials

    Different types of noise from the surrounding environment always interfere with speech and produce annoying signals for the human auditory system. To exchange speech information in a noisy environment, speech quality and intelligibility must be maintained, which is a challenging task. In most speech enhancement algorithms, the speech signal is characterized by Gaussian or super-Gaussian models, and noise is characterized by a Gaussian prior. However, these assumptions do not always hold in real-life situations, thereby negatively affecting the estimation and, eventually, the performance of the enhancement algorithm. Accordingly, this paper focuses on deriving an optimum low-distortion estimator with models that fit well with speech and noise data signals. This estimator provides minimum levels of speech distortion and residual noise, with additional improvements in speech perceptual aspects, via four key steps. First, a recent transform based on an orthogonal polynomial is used to map the observed signal into a transform domain. Second, noise classification based on feature extraction is adopted to find accurate and mutable models for noise signals. Third, two stages of nonlinear and linear estimators based on the minimum mean square error (MMSE) and new models for speech and noise are derived to estimate the clean speech signal. Finally, the estimated speech signal in the time domain is obtained by applying the inverse of the orthogonal transform. The results show that the average classification accuracy of the proposed approach is 99.43%. In addition, the proposed algorithm significantly outperforms existing speech estimators in terms of quality and intelligibility measures.
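
    The four-step structure described above (forward transform, noise modelling, gain estimation, inverse transform) can be illustrated with the highly simplified NumPy sketch below. The orthonormal DCT basis stands in for the paper's orthogonal-polynomial transform, and a basic power-subtraction gain stands in for the noise classifier and MMSE estimators, so every numeric choice here is an assumption rather than the published algorithm.

```python
# Simplified transform-domain enhancement skeleton (frame -> transform ->
# gain -> inverse transform). The DCT basis and the simple gain rule are
# stand-ins for the paper's orthogonal-polynomial transform and MMSE estimators.
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; a placeholder for the paper's polynomial basis."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    basis[0, :] /= np.sqrt(2.0)
    return basis                        # rows are orthonormal: basis @ basis.T = I

def enhance(noisy, frame_len=256, noise_frames=5):
    T = dct_basis(frame_len)
    n_frames = len(noisy) // frame_len
    frames = noisy[:n_frames * frame_len].reshape(n_frames, frame_len)
    coeffs = frames @ T.T               # step 1: forward transform of each frame
    # Crude noise estimate: assumes the first few frames are noise-only.
    noise_power = np.mean(coeffs[:noise_frames] ** 2, axis=0)
    gain = np.maximum(1.0 - noise_power / (coeffs ** 2 + 1e-12), 0.05)
    clean_coeffs = gain * coeffs        # step 3 stand-in: attenuate noisy coefficients
    return (clean_coeffs @ T).reshape(-1)   # step 4: inverse transform

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(t.size)
print(enhance(noisy).shape)             # (7936,) with these frame settings
```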

    Models, detection methods, and challenges in DC arc fault: A review

    The power generation of solar photovoltaic (PV) technology is being implemented in nations worldwide due to its environmentally clean characteristics, and the applications and usage of PV power systems are therefore growing significantly. Despite the strength of PV arrays in power systems, the arrays remain susceptible to certain faults. An effective supply requires economic returns, the safety of equipment and humans, and precise tools for fault identification, diagnosis, and interruption. Meanwhile, unidentified arc faults pose a serious fire hazard to commercial, residential, and utility-scale PV systems. To ensure a secure and dependable distribution of electricity, such hazards must be detected in the early phases of distribution. In this paper, a detailed review of modern approaches for the identification of DC arc faults in PV systems is presented. In addition, a thorough comparison is performed between the various DC arc-fault models, characteristics, and approaches used for fault identification.

    Fast temporal video segmentation based on Krawtchouk-Tchebichef moments

    With the increasing growth of multimedia data, real-world video sharing websites are becoming huge in repository size, particularly their video databases. This growth calls for superior video processing techniques, because video contains a great deal of useful information. Temporal video segmentation (TVS) is considered an essential stage in content-based video indexing and retrieval systems. TVS aims to detect the boundaries between successive video shots. TVS algorithm design is still challenging because most recent methods are unable to achieve fast and robust detection. In this regard, this paper proposes a TVS algorithm with high precision and recall values and low computational cost for detecting different types of video transitions. The proposed algorithm is based on orthogonal moments, which are used as features to detect transitions. To increase both the speed and the accuracy of the TVS algorithm, fast block processing and embedded orthogonal polynomial algorithms are utilized to extract features, which allows multiple local features to be extracted at low computational cost. A support vector machine (SVM) classifier is used to detect transitions; specifically, the hard transitions are detected by the trained SVM model. The proposed algorithm has been evaluated on four datasets, and its performance has been compared to several state-of-the-art TVS algorithms. Experimental results demonstrate that the proposed algorithm improves recall, precision, and F1-score within the ranges (1.31 - 2.58), (1.53 - 4.28), and (1.41 - 3.03), respectively. Moreover, the proposed method shows a low computational cost of 2% of real time.
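
    To make the block-moment-plus-SVM structure concrete, the sketch below extracts low-order separable moments from fixed-size blocks and feeds consecutive-frame feature differences to an SVM. An orthonormal DCT basis stands in for the embedded Krawtchouk-Tchebichef polynomials, and random frames and labels stand in for real training data, so this is a structural sketch only, not the published algorithm.

```python
# Block-wise moment features for frames plus an SVM transition classifier.
# The DCT basis stands in for the paper's orthogonal polynomials, and the
# random frames/labels stand in for real training data: a structural sketch only.
import numpy as np
from sklearn.svm import SVC

def orthonormal_basis(n):
    """Orthonormal DCT-II rows; a placeholder for an orthogonal-polynomial basis."""
    k = np.arange(n)[:, None]; x = np.arange(n)[None, :]
    B = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    B[0, :] /= np.sqrt(2.0)
    return B

def block_moment_features(frame, block=8, order=3):
    """Split a grayscale frame into blocks and keep the low-order 2-D moments
    of each block, computed in separable form as B @ block @ B.T."""
    B = orthonormal_basis(block)[:order]           # keep only low-order rows
    h, w = frame.shape
    feats = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            m = B @ frame[r:r + block, c:c + block] @ B.T   # order x order moments
            feats.append(m.ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(1)
frames = rng.random((40, 64, 64))                  # stand-in video frames
X = np.array([np.abs(block_moment_features(frames[i + 1]) -
                     block_moment_features(frames[i])) for i in range(39)])
y = rng.integers(0, 2, size=39)                    # stand-in transition labels
clf = SVC(kernel="rbf").fit(X, y)                  # trained SVM flags transitions
print(clf.predict(X[:5]))
```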

    Signal compression and enhancement using a new orthogonal-polynomial-based discrete transform

    Discrete orthogonal functions are important tools in digital signal processing and have received considerable attention over the last few decades. This study proposes a new set of orthogonal functions called the discrete Krawtchouk-Tchebichef transform (DKTT). Two traditional orthogonal polynomials, namely Krawtchouk and Tchebichef, are combined to form the DKTT. The theoretical and mathematical frameworks of the proposed transform are provided. The DKTT was tested using speech and image signals from a well-known database under clean and noisy conditions, and was applied in a speech enhancement algorithm to evaluate how efficiently it removes noise from speech signals. The performance of the DKTT was compared with that of standard transforms. Different types of distance (similarity index) and objective measures in terms of image quality, speech quality, and speech intelligibility assessments were used for comparison. Experimental tests show that the DKTT delivers remarkable achievements and excellent results in signal compression and speech enhancement. Therefore, the DKTT can be considered a new set of orthogonal functions for future signal processing applications.
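
    The exact DKTT kernel, i.e. how the Krawtchouk and Tchebichef bases are combined, is not reproduced here. As a hedged illustration of the compression use case, the sketch below builds the orthonormal discrete Tchebichef basis with a standard three-term recurrence and compresses a signal by keeping only its largest-magnitude coefficients; substituting the paper's combined DKTT matrix for T would give the corresponding DKTT workflow.

```python
# Orthonormal discrete Tchebichef basis via its three-term recurrence, used
# here for simple coefficient-truncation compression. This is one ingredient
# of DKTT; the paper's combined Krawtchouk-Tchebichef kernel is not reproduced.
import numpy as np

def tchebichef_basis(N):
    """Rows t_n(x), n = 0..N-1, of the orthonormal discrete Tchebichef basis."""
    T = np.zeros((N, N))
    x = np.arange(N, dtype=float)
    T[0, :] = 1.0 / np.sqrt(N)
    T[1, :] = (2 * x + 1 - N) * np.sqrt(3.0 / (N * (N**2 - 1)))
    for n in range(2, N):
        a1 = (2.0 / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a2 = ((1.0 - N) / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a3 = ((1.0 - n) / n) * np.sqrt((2 * n + 1) / (2 * n - 3.0)) \
             * np.sqrt((N**2 - (n - 1)**2) / (N**2 - n**2))
        T[n, :] = (a1 * x + a2) * T[n - 1, :] + a3 * T[n - 2, :]
    return T

def compress(signal, keep):
    """Keep only the `keep` largest-magnitude transform coefficients."""
    T = tchebichef_basis(len(signal))
    coeffs = T @ signal
    coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0
    return T.T @ coeffs                  # inverse transform of truncated coefficients

N = 32
sig = np.cos(np.linspace(0, 3 * np.pi, N)) + 0.1 * np.linspace(0, 1, N)
T = tchebichef_basis(N)
print(np.max(np.abs(T @ T.T - np.eye(N))))           # orthonormality error (tiny)
print(np.max(np.abs(sig - compress(sig, keep=8))))   # small reconstruction error
```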

    Fast shot boundary detection based on separable moments and support vector machine

    The large number of visual applications on multimedia sharing websites and social networks contributes to the increasing amount of multimedia data in cyberspace. Video data is a rich source of information and is considered the most demanding in terms of storage space. With the huge growth of digital video production, video management has become a challenging task. Video content analysis (VCA) aims to provide big-data solutions by automating video management. To this end, shot boundary detection (SBD) is considered an essential step in VCA. It aims to partition the video sequence into shots by detecting shot transitions. The high computational cost of transition detection is a bottleneck for real-time applications. Thus, in this paper, a balance between detection accuracy and speed for SBD is addressed by presenting a new method for fast video processing. The proposed SBD framework is based on the concept of candidate segment selection with a frame active area and separable moments. First, for each frame, the active area is selected so that only the informative content is considered, which reduces both the computational cost and disturbance factors. Second, for each active area, the moments are computed using orthogonal polynomials. Then, an adaptive threshold and inequality criteria are used to eliminate most of the non-transition frames and preserve candidate segments. For further elimination, two rounds of bisection comparisons are applied; as a result, the computational cost of the subsequent stages is reduced. Finally, a support vector machine classifier is used to detect the cut transitions. The improvement of the proposed fast video processing method over existing methods in terms of computational complexity and accuracy is verified: the average improvements in frame percentage and transition accuracy percentage are 1.63% and 2.05%, respectively. Moreover, a comparative study with state-of-the-art SBD algorithms confirms the superiority of the proposed algorithm in computation time, with an improvement of over 38%.
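
    The candidate-selection stage described above can be illustrated with a simple sliding-window adaptive threshold applied to a frame-dissimilarity signal, as in the sketch below. The window length and threshold constant are assumptions, and the active-area moments, bisection rounds, and SVM stage are omitted, so this shows only the shape of the elimination step.

```python
# Candidate-segment selection with a sliding-window adaptive threshold on a
# frame-dissimilarity signal. Window length and the threshold constant are
# assumptions; the moment features, bisection rounds, and SVM stage are omitted.
import numpy as np

def candidate_segments(dissimilarity, window=20, alpha=2.0):
    """Return (start, end) index pairs where the frame dissimilarity exceeds
    a local mean + alpha * std threshold, merging adjacent candidate frames."""
    d = np.asarray(dissimilarity, dtype=float)
    flags = np.zeros(d.size, dtype=bool)
    for i in range(d.size):
        lo, hi = max(0, i - window), min(d.size, i + window + 1)
        local = np.delete(d[lo:hi], i - lo)          # neighbourhood without d[i]
        flags[i] = d[i] > local.mean() + alpha * local.std()
    segments, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            segments.append((start, i - 1)); start = None
    if start is not None:
        segments.append((start, d.size - 1))
    return segments

rng = np.random.default_rng(2)
d = rng.random(200) * 0.1
d[50], d[120:123] = 0.9, 0.8                       # synthetic cut and gradual transition
print(candidate_segments(d))                       # e.g. [(50, 50), (120, 122)]
```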

    A fast feature extraction algorithm for image and video processing

    Medical images and videos are utilized to discover, diagnose, and treat diseases. Managing, storing, and retrieving these images effectively are therefore important topics. The rapid growth of multimedia data, including medical images and videos, has caused a swift rise in data transmission volume and repository size. Multimedia data contains useful information; however, it consumes enormous storage space, and processing that sheer volume of data requires considerable time. Image and video applications therefore demand a reduction in the computational cost (processing time) of feature extraction. This paper introduces a novel method to compute transform coefficients (features) from images or video frames. These features are used to represent the local visual content of images and video frames. We compared the proposed method with the traditional approach of feature extraction using a standard image technique. Furthermore, the proposed method is employed in shot boundary detection (SBD) applications to detect transitions between video frames. The standard TRECVID 2005, 2006, and 2007 video datasets are used to evaluate the performance of the SBD applications. The achieved results show that the proposed algorithm significantly reduces the computational cost in comparison to the traditional method.

    Image edge detection operators based on orthogonal polynomials

    Orthogonal polynomials (OPs) are beneficial for image processing. OPs are used to map an image or a scene into a moment domain, and the moments are subsequently used to extract object contours utilised in various applications. In this study, OP-based edge detection operators are introduced to replace traditional convolution-based and block processing methods with direct matrix multiplication. A mathematical model with empirical study results is established to investigate the performance of the proposed detectors compared with that of traditional algorithms, such as the Sobel and Canny operators. The proposed operators are then evaluated on entire images from a well-known data set. Experimental results reveal that the proposed operator achieves a more favourable interpretation than traditional methods do, especially for images distorted by motion effects.
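
    The central idea, replacing sliding-window convolution with direct matrix multiplication, can be illustrated without the paper's specific orthogonal-polynomial operators (which are not reproduced here) by encoding Sobel-style smoothing and differencing stencils as banded matrices and obtaining both gradients as matrix products, as sketched below.

```python
# Edge gradients computed as direct matrix products instead of sliding-window
# convolution. The banded Sobel-like operators below are a simplified stand-in
# for the paper's orthogonal-polynomial operators, which are not reproduced.
import numpy as np

def banded(n, stencil):
    """n x n banded matrix applying a 3-tap stencil with replicated borders."""
    M = np.zeros((n, n))
    for i in range(n):
        for k, c in zip((-1, 0, 1), stencil):
            M[i, min(max(i + k, 0), n - 1)] += c
    return M

def gradients(img):
    h, w = img.shape
    smooth = (1.0, 2.0, 1.0)           # Sobel-style smoothing stencil
    diff = (-1.0, 0.0, 1.0)            # central-difference stencil
    gx = banded(h, smooth) @ img @ banded(w, diff).T    # horizontal gradient
    gy = banded(h, diff) @ img @ banded(w, smooth).T    # vertical gradient
    return np.hypot(gx, gy)            # gradient magnitude (edge strength)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # vertical step edge
edges = gradients(img)
print(edges[16, 14:19])                # strongest response around columns 15-16
```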

    Fast recursive computation of Krawtchouk polynomials

    Krawtchouk polynomials (KPs) and their moments are widely used in the field of signal processing for their superior discriminatory properties. This study proposes a new fast recursive algorithm to compute Krawtchouk polynomial coefficients (KPCs). The algorithm is based on the symmetry of the KPCs across the primary and secondary diagonals of the polynomial array. The n-x plane of the KP array is partitioned into four triangles that are symmetrical across the primary and secondary diagonals. The proposed algorithm computes the KPCs for only one triangle (partition), while the coefficients of the other three triangles (partitions) are obtained using the derived symmetry properties of the KP; therefore, only N/4 recursion steps are required. The proposed algorithm can also be used to compute polynomial coefficients for different values of the parameter p in the interval (0, 1). The performance of the proposed algorithm is compared with that of previous work in terms of image reconstruction error, polynomial size, and computation cost. Moreover, the proposed algorithm is applied in a face recognition system to determine the impact of the parameter p on feature extraction ability. Simulation results show that the proposed algorithm has a remarkable advantage over existing algorithms for a wide range of values of the parameter p and the polynomial size N, especially in reducing the computation time and the number of operations required.
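
    A compact sketch of the general idea is given below: run the classical three-term recurrence over one triangle of the n-x plane only, mirror across the main diagonal using the symmetry K_n(x) = K_x(n), and then apply the weights to obtain the orthonormal array. This captures only the main-diagonal half of the saving; the paper's secondary-diagonal symmetry and its further optimizations are not reproduced here.

```python
# Weighted Krawtchouk polynomial array built by running the classical
# three-term recurrence over one triangle (x >= n) only and mirroring across
# the main diagonal via K_n(x) = K_x(n). The paper's additional
# secondary-diagonal symmetry and related shortcuts are not reproduced.
import numpy as np
from math import comb

def weighted_krawtchouk(N, p=0.5):
    M = N - 1
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))                     # unnormalized K_n(x; p, M)
    K[0, :] = 1.0
    K[1, 1:] = 1.0 - x[1:] / (p * M)
    for n in range(1, M):                    # fill only the triangle x >= n + 1
        K[n + 1, n + 1:] = ((p * (M - n) + n * (1 - p) - x[n + 1:]) * K[n, n + 1:]
                            - n * (1 - p) * K[n - 1, n + 1:]) / (p * (M - n))
    K = np.triu(K) + np.triu(K, 1).T         # mirror: K_n(x) = K_x(n)
    w = np.array([comb(M, int(xi)) * p**xi * (1 - p)**(M - xi) for xi in x])
    rho = np.array([((1 - p) / p)**n / comb(M, n) for n in range(N)])
    return K * np.sqrt(np.outer(1.0 / rho, w))   # weighted (orthonormal) rows

Kbar = weighted_krawtchouk(16, p=0.4)
# Orthonormality check: the deviation should be near machine precision.
print(np.max(np.abs(Kbar @ Kbar.T - np.eye(16))))
```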

    Shot boundary detection based on orthogonal polynomial

    Shot boundary detection (SBD) is an essential step in video content analysis, indexing, retrieval, and summarization. SBD is the process of automatically partitioning a video into its basic units, known as shots, by detecting the transitions between shots. The design of SBD algorithms has evolved from simple feature comparisons to rigorous probabilistic and complex models; nevertheless, transition detection still needs to be made faster and more accurate. Extensive research has employed orthogonal polynomials (OPs) and their moments in computer vision and signal processing owing to their powerful performance in signal analysis. A new SBD algorithm based on OPs is proposed in this paper. Features are derived from the orthogonal transform (moment) domain to detect the hard transitions in video sequences. Moments are used because of their ability to represent a signal (video frame) without information redundancy. These features are the moments of the smoothed and gradient versions of the video frames, computed using a newly developed OP, the squared Krawtchouk-Tchebichef polynomial, and fused to form a feature vector. Finally, a support vector machine is utilized to detect the hard transitions. In addition, a comparison between the proposed algorithm and other state-of-the-art algorithms is performed to reinforce the capability of the proposed work. The proposed algorithm is examined using three well-known datasets: TRECVID 2005, TRECVID 2006, and TRECVID 2007. The outcomes of the comparative analysis show the superior performance of the proposed algorithm against existing algorithms.